
                                AI Assistants: A New Challenge for CISOs

                                Mar 27 2024

                                Over the past year, AI innovation has swept through the workplace. Across industries and all team functions, we are seeing employees using AI assistants to streamline various tasks, including taking minutes, writing emails, developing code, crafting marketing strategies and even helping with managing company finances. As a CISO, I’m already envisaging an AI assistant which will help me with compliance strategy by actively monitoring regulatory changes, evaluating an organisation’s compliance status, and identifying areas for improvement. 

However, amidst all this enthusiasm, there is a very real challenge facing CISOs and DPOs: how to protect corporate data and IP from leaking through these generative AI platforms and to third-party providers.

                                Curiosity won’t kill the CISO

While many enterprises have contemplated blocking these tools on their systems outright, doing so could limit innovation, create a culture of distrust toward the workforce, or even lead to "Shadow AI": the unapproved use of third-party AI applications outside the corporate network. To a certain extent, the horse has already bolted. Data shows that within enterprises, AI assistants are already integrated into day-to-day tasks. The writing assistant Grammarly, the second most popular generative AI app, is currently used by 3.1% of employees, and around a third of the conference calls I attend now have an AI assistant on the guest list. With the increasing availability of AI assistants like Microsoft Copilot, the researchers at Netskope Threat Labs expect AI assistants to grow in popularity in 2024.

Instead of blocking the tools outright, CISOs can deploy continuous protection policies using intelligent Data Loss Prevention (DLP) tools to enable the safe use of AI applications. DLP tools can ensure that no sensitive information appears in input queries to AI applications, protecting critical data and preventing unauthorised access, leaks, or misuse.
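To make the idea concrete, here is a minimal sketch of the kind of inspection a DLP policy performs on prompts before they reach an AI assistant. The pattern names and regular expressions are illustrative assumptions, not Netskope's actual detectors; production DLP engines use far richer classifiers (exact-data matching, fingerprinting, ML-based detection) than a few regexes.

```python
import re

# Hypothetical sensitive-data patterns; real DLP engines use much richer detectors.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(prompt)
```

In practice such a check would sit in an inline proxy or gateway, where a match can block the request, redact the offending span, or simply coach the user, rather than silently dropping the prompt.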

                                CISOs should also take an active role in evaluating the applications used by employees, restricting access to those that do not align with business needs or pose an undue risk.

                                Once a CISO identifies an AI assistant as relevant to their organisation, the next step involves vetting the vendor and assessing its data policies. During this process, CISOs should equip themselves with an extensive list of questions, including: 

                                1. Data handling practices: What becomes of the data an employee inputs?

  Understanding how the vendor manages and protects the data is crucial for ensuring data privacy and security. A study by The World Economic Forum found that a staggering 95% of cybersecurity incidents stem from human error, and entrusting sensitive data to a third-party AI assistant can exacerbate this risk. 

                                  There’s even greater cause for pause; by feeding data into these tools, organisations may be inadvertently contributing to the training of potentially competitive AI models. This can lead to a scenario where proprietary information or insights about the organisation’s operations can be leveraged by competitors, posing significant risks to the organisation’s competitive advantage and market position.
2. Is the model used for additional services privately or publicly? Is the model developed by the company itself or based upon a third-party solution? 

  Many AI assistant apps used by employees depend on third-party and fourth-party services. It's common for employees to use apps without being aware that the backend infrastructure operates on a publicly accessible platform. As CISOs, we are particularly mindful of the significant costs associated with AI technology, so we know that free or inexpensive options make their money in other ways, such as selling data or the AI capabilities that the data has helped train. In such cases, a thorough examination of the fine print becomes imperative for CISOs to ensure the protection and privacy of sensitive data. 
3. What happens to the output? Are these outputs employed to train subsequent models?

  Many AI vendors do not just use the data input to train their models; they also use the data output. This loop creates ever more tangled ways in which the apps could inadvertently expose sensitive company information or lead to copyright infringement, and it can be hard to untangle in supply-chain data protection planning. 

                                Looking within

As private enterprises await stronger legislative guidance on AI, it falls on CISOs and DPOs to promote self-regulation and ethical AI practices within their organisations. With the proliferation of AI assistants, it is crucial they act now to evaluate the implications of AI tools in the workplace. Every employee will soon be performing many day-to-day tasks in conjunction with their AI assistants. This should motivate companies to set up internal governance committees not just to evaluate tools and their applications, but also to discuss AI ethics, review processes, and plan strategy in advance of more widespread adoption and regulation. This is exactly how we are approaching the challenge within the security teams here at Netskope: an AI governance committee is responsible for our AI strategy and has built the mechanisms to properly inspect emerging vendors and their data processing approaches. 

                                Employees across all industries and all levels can benefit from an AI assistant, with Bill Gates saying “they will utterly change how we live our lives, online and off.” For CISOs, the key to unlocking their potential starts with responsible governance.

                                Neil Thacker
                                Neil Thacker is a veteran information security professional and a data protection and privacy expert well-versed in the European Union GDPR.